On the Generalization Ability of Online Strongly Convex Programming Algorithms
Authors
Abstract
This paper examines the generalization properties of online convex programming algorithms when the loss function is Lipschitz and strongly convex. Our main result is a sharp bound, which holds with high probability, on the excess risk of the output of an online algorithm in terms of its average regret. This allows one to use recent algorithms with logarithmic cumulative regret guarantees to achieve fast, high-probability convergence rates for the excess risk. As a corollary, we characterize the high-probability convergence rate of PEGASOS, a recently proposed method for solving the SVM optimization problem.
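To make the corollary concrete, the following is a minimal sketch of a Pegasos-style stochastic subgradient method for the SVM objective f(w) = (λ/2)‖w‖² + (1/m) Σᵢ max(0, 1 − yᵢ⟨w, xᵢ⟩). The function name, the 1/(λt) step-size schedule, the optional projection, and the returned average iterate are illustrative assumptions, not code from the paper.

```python
import numpy as np

def pegasos_sketch(X, y, lam=0.1, T=1000, seed=0):
    """Pegasos-style stochastic subgradient descent for the SVM objective
    f(w) = lam/2 * ||w||^2 + (1/m) * sum_i max(0, 1 - y_i <w, x_i>).
    Illustrative sketch: returns the average iterate, whose excess risk
    the paper's online-to-batch conversion bounds via the average regret."""
    rng = np.random.default_rng(seed)
    m, d = X.shape
    w = np.zeros(d)
    w_avg = np.zeros(d)
    for t in range(1, T + 1):
        i = rng.integers(m)            # draw one training example uniformly
        eta = 1.0 / (lam * t)          # step size for a lam-strongly convex objective
        margin = y[i] * X[i].dot(w)
        w *= 1.0 - eta * lam           # gradient step on the regularizer
        if margin < 1.0:               # hinge loss active: add its subgradient
            w += eta * y[i] * X[i]
        radius = 1.0 / np.sqrt(lam)    # optional projection onto the feasible ball
        norm = np.linalg.norm(w)
        if norm > radius:
            w *= radius / norm
        w_avg += (w - w_avg) / t       # running average of the iterates
    return w_avg
```

Because the regularized hinge loss is λ-strongly convex, updates of this form achieve logarithmic cumulative regret, which the paper's bound converts into an excess-risk rate of roughly log(T)/(λT) that holds with high probability.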
Similar resources
Generalization Ability of Online Strongly Convex Learning Algorithms
Online learning, in contrast to batch learning, proceeds in a sequence of rounds. At the beginning of a round, an example is presented to the learning algorithm, which uses its current hypothesis to label it; the algorithm is then shown the correct label and updates its hypothesis. It is a different learning paradigm from batch learning, where we ...
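A minimal skeleton of the round-by-round protocol just described; the `learner` interface with `predict` and `update` methods is a hypothetical stand-in, not an API from the cited paper.

```python
def run_online_protocol(examples, learner):
    """Run one pass of online learning: in each round the learner labels
    the incoming example with its current hypothesis, then receives the
    correct label and updates. Returns the number of mistakes."""
    mistakes = 0
    for x, y in examples:              # a round begins: an example arrives
        y_hat = learner.predict(x)     # label it using the current hypothesis
        if y_hat != y:
            mistakes += 1
        learner.update(x, y)           # correct label revealed; update hypothesis
    return mistakes
```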
Primal-dual path-following algorithms for circular programming
Circular programming problems are a new class of convex optimization problems that include second-order cone programming problems as a special case. Alizadeh and Goldfarb [Math. Program. Ser. A 95 (2003) 3-51] introduced primal-dual path-following algorithms for solving second-order cone programming problems. In this paper, we generalize their work by using the machinery of Euclidean Jordan alg...
Applications of strong convexity–strong smoothness duality to learning with matrices
It is known that a function is strongly convex with respect to some norm if and only if its conjugate function is strongly smooth with respect to the dual norm. This result has already been found to be a key component in deriving and analyzing several learning algorithms. Utilizing this duality, we isolate a single inequality which seamlessly implies both generalization bounds and online regret...
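A standard concrete instance of this duality (an illustrative example, not quoted from the cited paper): the negative entropy on the probability simplex and its conjugate, the log-sum-exp function.

```latex
% Negative entropy is 1-strongly convex w.r.t. the l1 norm on the simplex:
\[
  f(w) = \sum_{i=1}^{d} w_i \log w_i
  \quad\text{is $1$-strongly convex w.r.t. } \|\cdot\|_1 ,
\]
% and its conjugate, log-sum-exp, is 1-strongly smooth w.r.t. the dual norm:
\[
  f^*(\theta) = \log \sum_{i=1}^{d} e^{\theta_i}
  \quad\text{is $1$-strongly smooth w.r.t. } \|\cdot\|_\infty .
\]
```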
On the Generalization Ability of Online Learning Algorithms for Pairwise Loss Functions
In this paper, we study the generalization properties of online learning based stochastic methods for supervised learning problems where the loss function is dependent on more than one training sample (e.g., metric learning, ranking). We present a generic decoupling technique that enables us to provide Rademacher complexity-based generalization error bounds. Our bounds are in general tighter th...
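As a sketch of the pairwise setting studied above, the loop below pairs each incoming example against previously seen ones and takes a subgradient step on a pairwise hinge loss, as in bipartite ranking. The unbounded buffer, step size, and function name are illustrative assumptions, not the paper's method.

```python
import numpy as np

def online_pairwise_sketch(stream, eta=0.1):
    """Online learning with a pairwise loss: each new example (x, y), with
    y in {-1, +1}, is compared against all earlier examples, and a
    subgradient step is taken on the pairwise hinge max(0, 1 - <w, z>)
    for every cross-class pair. Illustrative; keeps an unbounded buffer."""
    w, buffer = None, []
    for x, y in stream:
        if w is None:
            w = np.zeros_like(x, dtype=float)
        for x_old, y_old in buffer:
            if y != y_old:                       # only cross-class pairs matter
                z = x - x_old if y > y_old else x_old - x  # positive minus negative
                if w.dot(z) < 1.0:               # pairwise hinge is active
                    w = w + eta * z              # subgradient step
        buffer.append((x, y))
    return w
```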
On the duality of strong convexity and strong smoothness: Learning applications and matrix regularization
We show that a function is strongly convex with respect to some norm if and only if its conjugate function is strongly smooth with respect to the dual norm. This result has already been found to be a key component in deriving and analyzing several learning algorithms. Utilizing this duality, we isolate a single inequality which seamlessly implies both generalization bounds and online regret bou...